
    Time-Aware Probabilistic Knowledge Graphs

    The emergence of open information extraction as a tool for constructing and expanding knowledge graphs has led to a growing amount of temporal data in resources such as YAGO, NELL and Wikidata. While YAGO and Wikidata maintain the valid time of facts, NELL records the time point at which a fact was retrieved from some Web corpus. Collectively, these knowledge graphs (KGs) store facts extracted from Wikipedia and other sources. Due to the imprecise nature of the extraction tools used to build and expand KGs such as NELL, the facts in the KG are weighted with a confidence value representing the correctness of the fact. Additionally, NELL can be considered a transaction-time KG because every fact is associated with its extraction date. YAGO and Wikidata, on the other hand, use the valid-time model because they maintain facts together with their validity time (temporal scope). In this paper, we propose a bitemporal model (combining the transaction-time and valid-time models) for maintaining and querying bitemporal probabilistic knowledge graphs. We study coalescing and the scalability of marginal and MAP inference. Moreover, we show that the complexity of reasoning tasks in atemporal probabilistic KGs carries over to the bitemporal setting. Finally, we report the evaluation results of the proposed model.
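
    To make the data model concrete, below is a minimal Python sketch of a bitemporal, probabilistic fact and of valid-time coalescing. The Fact structure, the merge policy (keep the earliest extraction date and the highest confidence), and the example facts are illustrative assumptions, not the representation or algorithm used in the paper.

    # A minimal sketch of a bitemporal probabilistic fact store, assuming a
    # simplified model: each fact carries a valid-time interval, a transaction-time
    # stamp (its extraction date), and a confidence weight.
    from dataclasses import dataclass
    from datetime import date
    from typing import List, Tuple

    @dataclass(frozen=True)
    class Fact:
        subject: str
        predicate: str
        obj: str
        valid: Tuple[date, date]      # valid-time interval [start, end)
        extracted: date               # transaction time (extraction date)
        prob: float                   # confidence weight from the extractor

    def coalesce(facts: List[Fact]) -> List[Fact]:
        """Merge value-equivalent facts whose valid-time intervals overlap or meet.

        Keeps the earliest extraction date and the maximum confidence of the
        merged group (one possible policy; the paper may define this differently).
        """
        out: List[Fact] = []
        for f in sorted(facts, key=lambda x: (x.subject, x.predicate, x.obj, x.valid[0])):
            if out and (out[-1].subject, out[-1].predicate, out[-1].obj) == \
                       (f.subject, f.predicate, f.obj) and f.valid[0] <= out[-1].valid[1]:
                prev = out.pop()
                out.append(Fact(f.subject, f.predicate, f.obj,
                                (prev.valid[0], max(prev.valid[1], f.valid[1])),
                                min(prev.extracted, f.extracted),
                                max(prev.prob, f.prob)))
            else:
                out.append(f)
        return out

    facts = [
        Fact("Obama", "presidentOf", "USA", (date(2009, 1, 20), date(2013, 1, 20)), date(2015, 3, 1), 0.95),
        Fact("Obama", "presidentOf", "USA", (date(2013, 1, 20), date(2017, 1, 20)), date(2016, 7, 1), 0.90),
    ]
    print(coalesce(facts))  # one coalesced fact spanning 2009-2017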

    Reasoning and Change Management in Modular Ontologies

    The benefits of modular representations are well known from many areas of computer science. In this paper, we concentrate on the benefits of modular ontologies with respect to the local containment of terminological reasoning. We define an architecture for modular ontologies that supports local reasoning by compiling implied subsumption relations. We further address the problem of guaranteeing the integrity of a modular ontology in the presence of local changes. We propose a strategy for analyzing changes and guiding the process of updating compiled information.
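
    As a rough illustration of compiled subsumption information and change analysis, the sketch below treats a module's compiled knowledge as the transitive closure of its local subsumption axioms and diffs that closure before and after a local edit. The data and the diff-based strategy are assumed for illustration; the architecture and update strategy proposed in the paper are more fine-grained.

    # A minimal sketch, assuming a module exposes its subsumption hierarchy as a
    # set of (sub, super) pairs and "compiled" knowledge is the transitive closure.
    from itertools import product
    from typing import Set, Tuple

    Edge = Tuple[str, str]

    def transitive_closure(axioms: Set[Edge]) -> Set[Edge]:
        closure = set(axioms)
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(closure, closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
        return closure

    def analyze_change(old_axioms: Set[Edge], new_axioms: Set[Edge]):
        """Return compiled subsumptions that appear / disappear after a local change."""
        old_c, new_c = transitive_closure(old_axioms), transitive_closure(new_axioms)
        return new_c - old_c, old_c - new_c

    old = {("Car", "Vehicle"), ("Vehicle", "MobileObject")}
    new = old | {("EBike", "Vehicle")}           # local addition
    added, removed = analyze_change(old, new)
    print(added)    # the newly implied subsumptions for EBike
    print(removed)  # set()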

    Towards Log-Linear Logics with Concrete Domains

    We present MEL++ (the M stands for Markov logic networks), an extension of the log-linear description logic EL++-LL with concrete domains, nominals, and instances. We use Markov logic networks (MLNs) to find the most probable, classified and coherent EL++ ontology from an MEL++ knowledge base. In particular, we develop a novel way to deal with concrete domains (also known as datatypes) by extending the cutting plane inference (CPI) algorithm of MLNs.
    Comment: StarAI201
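
    The sketch below illustrates only the log-linear selection idea in a toy form: given weighted subsumption axioms and disjointness constraints, it brute-forces the maximum-weight subset whose deductive closure is coherent. It is a hypothetical stand-in for the MLN-based cutting plane inference described above and ignores concrete domains, nominals, and instances.

    # Toy log-linear ontology selection: pick the maximum-weight subset of weighted
    # subsumption axioms whose transitive closure respects a set of disjointness
    # constraints (a crude notion of "coherent"). Brute force, illustration only.
    from itertools import combinations
    from typing import FrozenSet, List, Set, Tuple

    Axiom = Tuple[str, str, float]               # (sub-class, super-class, weight)

    def closure(pairs: Set[Tuple[str, str]]) -> Set[Tuple[str, str]]:
        pairs, done = set(pairs), False
        while not done:
            done = True
            for a, b in list(pairs):
                for c, d in list(pairs):
                    if b == c and (a, d) not in pairs:
                        pairs.add((a, d))
                        done = False
        return pairs

    def coherent(pairs: Set[Tuple[str, str]], disjoint: Set[FrozenSet[str]]) -> bool:
        supers = {}
        for a, b in closure(pairs):
            supers.setdefault(a, {a}).add(b)
        return not any(frozenset((x, y)) in disjoint
                       for sups in supers.values() for x in sups for y in sups if x != y)

    def most_probable_ontology(axioms: List[Axiom], disjoint: Set[FrozenSet[str]]):
        best, best_w = (), float("-inf")
        for k in range(len(axioms) + 1):
            for subset in combinations(axioms, k):
                w = sum(a[2] for a in subset)
                if w > best_w and coherent({(a, b) for a, b, _ in subset}, disjoint):
                    best, best_w = subset, w
        return best, best_w

    axioms = [("Virus", "Pathogen", 2.0), ("Virus", "Bacterium", 0.4),
              ("Bacterium", "Pathogen", 2.0)]
    disjoint = {frozenset(("Virus", "Bacterium"))}
    print(most_probable_ontology(axioms, disjoint))  # drops the noisy axiom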

    Ontological Engineering for the Cadastral Domain


    Spatial Reasoning for the Semantic Web - Use Cases and Technological Challenges

    The goal of semantic web research is to turn the World-Wide Web into a Web of Data that can be processed automatically to a much larger extent than is possible with traditional web technology. Important features of the solution currently being developed are the ability to link data from different sources and to provide formal definitions of the intended meaning of the terminology used in different sources, as a basis for deriving implicit information and for detecting conflicts. Both require the ability to reason about the definition of terms. With the development of OWL as the standard language for representing terminological knowledge, reasoning in description logics has become the major technique for performing this reasoning. So far, little attention has been paid to the problem of representing and reasoning about space and time on the semantic web. In particular, existing semantic web languages are not well suited for representing these aspects, as they require operating over metric spaces that behave fundamentally differently from the abstract interpretation domains that description logics are based on. Nevertheless, there is a strong need to integrate reasoning about space and time into existing semantic web technologies, especially because more and more data available on the web has references to space and time. Images taken by digital cameras are a good example of such data, as they come with a time stamp and geographic coordinates. In this paper, we concentrate on spatial aspects and discuss different use cases for reasoning about spatial aspects on the (semantic) web, as well as possible technological solutions for these use cases. Based on these discussions, we conclude that the actual open problem is not the existing technologies for terminological or spatial reasoning, but the lack of an established mechanism for combining the two.
    The Case for Spatial Queries: One of the most central functionalities that should be supported by semantic web technology is query answering over web data. The primary language for this purpose i
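
    As a toy illustration of the missing combination, the sketch below answers a query that has both a terminological part (instances of a class, via a small subsumption hierarchy) and a metric, spatial part (containment in a bounding box). The hierarchy, resources, and coordinates are made-up assumptions; a real system would delegate the spatial part to a spatial index or GIS engine.

    # Combining a terminological check with a metric, spatial check over toy data.
    from typing import Dict, List, Tuple

    SUBCLASS_OF: Dict[str, str] = {"Cathedral": "Building", "Building": "SpatialObject"}

    def is_a(cls: str, ancestor: str) -> bool:
        while cls is not None:
            if cls == ancestor:
                return True
            cls = SUBCLASS_OF.get(cls)
        return False

    # (name, class, (latitude, longitude)), e.g. the subjects of geotagged photos
    RESOURCES: List[Tuple[str, str, Tuple[float, float]]] = [
        ("KoelnerDom", "Cathedral", (50.941, 6.958)),
        ("Rheinfall", "Waterfall", (47.678, 8.615)),
    ]

    def query(ancestor: str, bbox: Tuple[float, float, float, float]) -> List[str]:
        """All resources that are instances of `ancestor` AND lie inside `bbox`."""
        min_lat, min_lon, max_lat, max_lon = bbox
        return [name for name, cls, (lat, lon) in RESOURCES
                if is_a(cls, ancestor)
                and min_lat <= lat <= max_lat and min_lon <= lon <= max_lon]

    # Buildings inside a box roughly covering the city of Cologne
    print(query("Building", (50.8, 6.8, 51.0, 7.1)))   # ['KoelnerDom']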

    Political Text Scaling Meets Computational Semantics

    During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by leveraging information about word frequencies in the documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures.
    Comment: Updated version - accepted for Transactions on Data Science (TDS)
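
    The snippet below is not SemScale itself; it only illustrates the underlying idea that, once documents have dense semantic representations, a one-dimensional scale can be read off them, here as the first principal component of a toy, randomly generated embedding matrix. For the actual algorithm and data, see the released implementation.

    # Toy 1-D scaling from semantic document representations (random stand-ins).
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for semantic document embeddings (e.g., averaged word vectors)
    doc_names = ["speech_A", "speech_B", "speech_C", "speech_D"]
    embeddings = rng.normal(size=(4, 50))          # 4 documents, 50-dim vectors

    # Centre and project onto the first principal component -> 1-D positions
    centred = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    positions = centred @ vt[0]

    for name, pos in sorted(zip(doc_names, positions), key=lambda p: p[1]):
        print(f"{name}: {pos:+.3f}")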

    A purely logic-based approach to approximate matching of Semantic Web Services

    Most current approaches to matchmaking of semantic Web services employ hybrid strategies consisting of logic-based and non-logic-based similarity measures (or even no logic-based similarity at all). This is mainly because pure logic-based matchers achieve good precision but very low recall. We present a purely logic-based matcher implementation based on approximate subsumption and extend this approach to take additional information about the taxonomy of the background ontology into account. Our aim is to provide a purely logic-based matchmaker implementation that also achieves reasonable recall without a large impact on precision.
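
    The sketch below is a crude path-distance stand-in for the matching idea: services advertise an output concept, a request names a desired concept, and candidates are ranked by their distance in the background taxonomy, so exact and subsumed matches come first. The taxonomy, services, and scoring are assumed examples; the paper's approximate subsumption is logic-based and more general than this heuristic.

    # Ranking service offers by taxonomy distance to the requested concept.
    from typing import Dict, List, Optional

    TAXONOMY: Dict[str, Optional[str]] = {       # concept -> direct super-concept
        "Thing": None, "Vehicle": "Thing", "Car": "Vehicle",
        "SportsCar": "Car", "Bicycle": "Vehicle",
    }

    def ancestors(c: str) -> List[str]:
        path = []
        while c is not None:
            path.append(c)
            c = TAXONOMY[c]
        return path

    def taxonomy_distance(a: str, b: str) -> int:
        """Edge distance via the lowest common ancestor in the taxonomy."""
        pa, pb = ancestors(a), ancestors(b)
        common = next(x for x in pa if x in pb)
        return pa.index(common) + pb.index(common)

    def rank_services(request: str, offers: Dict[str, str]) -> List[str]:
        return sorted(offers, key=lambda s: taxonomy_distance(request, offers[s]))

    offers = {"rentSportsCar": "SportsCar", "rentBike": "Bicycle", "rentCar": "Car"}
    print(rank_services("Car", offers))   # ['rentCar', 'rentSportsCar', 'rentBike']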

    Representation of Semantic Mappings

    The aim of this breakout session was to chart the landscape of existing approaches for representing mappings between heterogeneous models, identify common ideas and formulate research questions to be addressed in the future. The discussion mainly concerned three aspects: the nature of mappings, existing proposals for mappings, and open research questions.